Setup

Loading libraries and data.

Visualize New Members

Data: analytics/s3_data_activity/discord/raw_data

New Members: S1 and S2

Showing organic growth.

## Parsed with column specification:
## cols(
##   interval_start_timestamp = col_datetime(format = ""),
##   discovery_joins = col_double(),
##   invites = col_double(),
##   vanity_joins = col_double()
## )
## Parsed with column specification:
## cols(
##   interval_start_timestamp = col_datetime(format = ""),
##   discovery_joins = col_double(),
##   invites = col_double(),
##   vanity_joins = col_double()
## )
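The seasonal comparisons throughout this section follow one pattern: read each season's CSV, tag it with a season label, and overlay the two series. A minimal sketch of that pattern, with inline tibbles and a simplified `new_members` column standing in for the parsed files (the real CSVs carry discovery_joins, invites, and vanity_joins):

```r
library(tidyverse)

# Illustrative rows standing in for the per-season raw_data CSVs.
s1 <- tibble(interval_start_timestamp = as.Date(c("2021-07-01", "2021-08-01")),
             new_members = c(120, 150), season = "S1")
s2 <- tibble(interval_start_timestamp = as.Date(c("2021-10-15", "2021-11-15")),
             new_members = c(200, 240), season = "S2")

# Stack the seasons and overlay one line per season.
p <- bind_rows(s1, s2) %>%
  ggplot(aes(interval_start_timestamp, new_members, color = season)) +
  geom_line() +
  labs(x = NULL, y = "New members", title = "New Members: S1 vs S2")
```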

Visualize Total Members

Total Members: S1 and S2

Up and to the right

## Parsed with column specification:
## cols(
##   interval_start_timestamp = col_datetime(format = ""),
##   total_membership = col_double()
## )
## Parsed with column specification:
## cols(
##   interval_start_timestamp = col_datetime(format = ""),
##   total_membership = col_double()
## )

Visualize First Day Activation

First Day Activation: S1 and S2

Voice participation went up in Season 2:

## Parsed with column specification:
## cols(
##   interval_start_timestamp = col_datetime(format = ""),
##   new_members = col_double(),
##   pct_communicated = col_double(),
##   pct_opened_channels = col_double()
## )
## Parsed with column specification:
## cols(
##   interval_start_timestamp = col_datetime(format = ""),
##   new_members = col_double(),
##   pct_communicated = col_double(),
##   pct_opened_channels = col_double()
## )

Visualize Next Week Retention

## Warning: Removed 2 row(s) containing missing values (geom_path).

Total Retention: S1 and S2

Retention has not improved that much, but we have more new members.

## Parsed with column specification:
## cols(
##   interval_start_timestamp = col_datetime(format = ""),
##   new_members = col_double(),
##   pct_retained = col_double()
## )
## Warning: 2 parsing failures.
## row col  expected    actual                                                                  file
##  15  -- 3 columns 2 columns '../s2_data_activity/discord/raw_data/next-week-retention_290921.csv'
##  16  -- 3 columns 2 columns '../s2_data_activity/discord/raw_data/next-week-retention_290921.csv'
## Parsed with column specification:
## cols(
##   interval_start_timestamp = col_datetime(format = ""),
##   new_members = col_double(),
##   pct_retained = col_double()
## )
## Warning: 2 parsing failures.
## row col  expected    actual                                            file
##  15  -- 3 columns 2 columns './discord/raw_data/guild-retention_010122.csv'
##  16  -- 3 columns 2 columns './discord/raw_data/guild-retention_010122.csv'
## Warning: Removed 2 row(s) containing missing values (geom_path).
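The "2 parsing failures" warnings above flag rows where the CSV has fewer columns than the header promises; readr fills the missing values with NA, which geom_path then drops (hence the "Removed 2 row(s)" warnings). A small in-memory sketch of the same failure shape, using `problems()` to inspect it:

```r
library(readr)
library(dplyr)

# Hypothetical literal CSV: the second data row is missing pct_retained.
raw <- "ts,new_members,pct_retained\n2021-10-01,10,0.4\n2021-10-08,12\n"
retention <- read_csv(I(raw))

problems(retention)                          # row/col/expected/actual, as logged above
complete <- retention %>% filter(!is.na(pct_retained))  # drop short rows explicitly
```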

Visualize Visitors & Communicators

Total Visitor and Communicator: S1 and S2

Still below benchmark, but trending upward slightly in Season 2:

## Parsed with column specification:
## cols(
##   interval_start_timestamp = col_datetime(format = ""),
##   visitors = col_double(),
##   pct_communicated = col_double()
## )
## Parsed with column specification:
## cols(
##   interval_start_timestamp = col_datetime(format = ""),
##   visitors = col_double(),
##   pct_communicated = col_double()
## )

Visualize Messages - Avg Messages

Total Messages - Avg Messages: S1 and S2

Average messages per communicator are going down.

## Parsed with column specification:
## cols(
##   interval_start_timestamp = col_datetime(format = ""),
##   messages = col_double(),
##   messages_per_communicator = col_double()
## )
## Parsed with column specification:
## cols(
##   interval_start_timestamp = col_datetime(format = ""),
##   messages = col_double(),
##   messages_per_communicator = col_double()
## )

Visualize Voice Activity

Total Voice Activity: S1 and S2

## Parsed with column specification:
## cols(
##   interval_start_timestamp = col_datetime(format = ""),
##   speaking_minutes = col_double()
## )
## Parsed with column specification:
## cols(
##   interval_start_timestamp = col_datetime(format = ""),
##   speaking_minutes = col_double()
## )

Snapshot Votes: Season 0 and Season 1: Transform data

## Parsed with column specification:
## cols(
##   address = col_character(),
##   choice = col_double(),
##   balance = col_double(),
##   timestamp = col_double(),
##   dateUtc = col_character(),
##   authorIpfsHash = col_character()
## )
## Parsed with column specification:
## cols(
##   address = col_character(),
##   choice = col_double(),
##   balance = col_double(),
##   timestamp = col_double(),
##   dateUtc = col_character(),
##   authorIpfsHash = col_character()
## )
## Parsed with column specification:
## cols(
##   address = col_character(),
##   choice = col_double(),
##   balance = col_double(),
##   timestamp = col_double(),
##   dateUtc = col_character(),
##   authorIpfsHash = col_character(),
##   choice.2 = col_double()
## )
## Warning: 554 parsing failures.
## row col  expected    actual                                                                 file
##   1  -- 7 columns 6 columns '../s2_data_activity/snapshot/raw_data/snapshot-report-12414778.csv'
##   2  -- 7 columns 6 columns '../s2_data_activity/snapshot/raw_data/snapshot-report-12414778.csv'
##   3  -- 7 columns 6 columns '../s2_data_activity/snapshot/raw_data/snapshot-report-12414778.csv'
##   4  -- 7 columns 6 columns '../s2_data_activity/snapshot/raw_data/snapshot-report-12414778.csv'
##   5  -- 7 columns 6 columns '../s2_data_activity/snapshot/raw_data/snapshot-report-12414778.csv'
## ... ... ......... ......... ....................................................................
## See problems(...) for more details.
## Parsed with column specification:
## cols(
##   address = col_character(),
##   choice = col_double(),
##   balance = col_double(),
##   timestamp = col_double(),
##   dateUtc = col_character(),
##   authorIpfsHash = col_character()
## )
## Parsed with column specification:
## cols(
##   address = col_character(),
##   choice = col_double(),
##   balance = col_double(),
##   timestamp = col_double(),
##   dateUtc = col_character(),
##   authorIpfsHash = col_character(),
##   choice.1 = col_double()
## )
## Warning: 425 parsing failures.
## row col  expected    actual                                                                 file
##   1  -- 7 columns 6 columns '../s2_data_activity/snapshot/raw_data/snapshot-report-12655510.csv'
##   2  -- 7 columns 6 columns '../s2_data_activity/snapshot/raw_data/snapshot-report-12655510.csv'
##   3  -- 7 columns 6 columns '../s2_data_activity/snapshot/raw_data/snapshot-report-12655510.csv'
##   4  -- 7 columns 6 columns '../s2_data_activity/snapshot/raw_data/snapshot-report-12655510.csv'
##   5  -- 7 columns 6 columns '../s2_data_activity/snapshot/raw_data/snapshot-report-12655510.csv'
## ... ... ......... ......... ....................................................................
## See problems(...) for more details.
## Parsed with column specification:
## cols(
##   address = col_character(),
##   choice = col_double(),
##   balance = col_double(),
##   timestamp = col_double(),
##   dateUtc = col_character(),
##   authorIpfsHash = col_character()
## )
## Parsed with column specification:
## cols(
##   address = col_character(),
##   choice = col_double(),
##   balance = col_double(),
##   timestamp = col_double(),
##   dateUtc = col_character(),
##   authorIpfsHash = col_character()
## )
## Parsed with column specification:
## cols(
##   address = col_character(),
##   choice = col_double(),
##   balance = col_double(),
##   timestamp = col_double(),
##   dateUtc = col_character(),
##   authorIpfsHash = col_character()
## )
## Parsed with column specification:
## cols(
##   address = col_character(),
##   choice = col_double(),
##   balance = col_double(),
##   timestamp = col_double(),
##   dateUtc = col_character(),
##   authorIpfsHash = col_character()
## )
## Parsed with column specification:
## cols(
##   address = col_character(),
##   choice = col_double(),
##   balance = col_double(),
##   timestamp = col_double(),
##   dateUtc = col_character(),
##   authorIpfsHash = col_character()
## )

Snapshot Votes: Season 0 and Season 1: Visualize data

Note: See above for data wrangling and transformation.

Transform & Wrangle Snapshot Votes: Season 2

Data from DaoDash: Snapshot S2 query.

Data: analytics/s3_data_activity/snapshot/raw_data

## Parsed with column specification:
## cols(
##   voter = col_character(),
##   choice = col_double(),
##   created = col_double(),
##   vote_id = col_character(),
##   title = col_character(),
##   proposal_id = col_character()
## )

Visualize Snapshot Votes: Season 2

Use snapshot_votes_s2_fct; still pending five more Snapshot vote events:

  • Funding for Season 2 Approved Projects
  • BanklessDAO Season 3 Specification
  • Firming Up Governance
  • Season 3 Grants Committee Elections
  • Season 3 Project and Guild Funding

Snapshot Votes across Seasons 0 - 2

Combine data sets

## Parsed with column specification:
## cols(
##   voter = col_character(),
##   choice = col_double(),
##   created = col_double(),
##   vote_id = col_character(),
##   title = col_character(),
##   proposal_id = col_character()
## )
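Per the column specs above, the S0/S1 exports use an `address` column while the S2 export uses `voter`, so combining them needs a rename before stacking. A sketch with illustrative tibbles standing in for the parsed CSVs:

```r
library(dplyr)

# Illustrative rows; the real data come from the snapshot raw_data CSVs.
votes_s01 <- tibble(address = c("0xaa", "0xbb"), choice = c(1, 2))
votes_s2  <- tibble(voter   = c("0xbb", "0xcc"), choice = c(1, 1))

# Align column names, tag seasons, then stack.
all_votes <- bind_rows(
  votes_s01 %>% rename(voter = address) %>% mutate(season = "S0-S1"),
  votes_s2  %>% mutate(season = "S2")
)

count(all_votes, season)
```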

Grants Committee Fund Distribution: Stacked Bar Chart

Note: For comparison between S1 and S2 where available.

SQL Query: See DaoDash Fund_Flow_GC_2. Note the date boundaries between S1 and S2. The same query is used for the Sankey.

NOTE: May need to add Treasury Guild as another potential source of funds (confirm Treasury address).

# --Season 1 (116 rows): timestamp_display < '2021-10-08'::DATE AND timestamp_display > '2021-06-14'::DATE  -- TOTAL: 11,288,361.28706
# --Season 2 (426 rows): timestamp_display > '2021-10-07'::DATE AND timestamp_display < '2022-01-08'::DATE  -- TOTAL: 10,780,178.00001

#WITH fund_flow AS (
#select 
#  'Out' as Direction,
#   ssb.from_address,
#   w.human_readable,
#   w.entity_type,
#   ssb.amount_display,
#   ssb.timestamp_display,
#   ssb.to_address
#from stg_subgraph_bank_1 ssb
#join public.bankless_wallet_entity_2 w on lower(ssb.from_address) = lower(w.wallet_address)
#-- note 'Out' is from_address
   
#union all 
   
#select 
#  'In' as Direction,
#  ssb.from_address,
#  w.human_readable,
#  w.entity_type,
#  ssb.amount_display,
#  ssb.timestamp_display,
#  ssb.to_address
#from stg_subgraph_bank_1 ssb
#join public.bankless_wallet_entity_2 w on lower(ssb.to_address) = lower(w.wallet_address)
#-- note 'In' is to_address
#),

#gcs1 AS (
#SELECT
#  direction,
#  from_address,
#  human_readable AS readable,
#  entity_type,
#  amount_display,
#  timestamp_display,
#  to_address
#FROM fund_flow 
#WHERE entity_type = 'Grants Committee'
#AND timestamp_display > '2021-10-07'::DATE  
#AND timestamp_display < '2022-01-08'::DATE
#AND direction = 'Out'
#--where timestamp_display > '2021-10-01T09:26:59'::TIMESTAMP 
#ORDER BY timestamp_display DESC
#)

#SELECT
#  g.direction,
#  g.from_address,
#  g.readable,
#  g.entity_type,
#  g.amount_display,
#  g.timestamp_display,
#  g.to_address,
#  b.human_readable 
#FROM gcs1 g 
#LEFT JOIN bankless_wallet_entity_2 b ON lower(g.to_address) = lower(b.wallet_address)

Visualization

## 
## Attaching package: 'scales'
## The following object is masked from 'package:purrr':
## 
##     discard
## The following object is masked from 'package:readr':
## 
##     col_factor
## Parsed with column specification:
## cols(
##   direction = col_character(),
##   from_address = col_character(),
##   readable = col_character(),
##   entity_type = col_character(),
##   amount_display = col_double(),
##   timestamp_display = col_datetime(format = ""),
##   to_address = col_character(),
##   human_readable = col_character()
## )
## Parsed with column specification:
## cols(
##   direction = col_character(),
##   from_address = col_character(),
##   readable = col_character(),
##   entity_type = col_character(),
##   amount_display = col_double(),
##   timestamp_display = col_datetime(format = ""),
##   to_address = col_character(),
##   human_readable = col_character()
## )
## `summarise()` regrouping output by 'human_readable' (override with `.groups` argument)
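The `summarise()` regrouping message above comes from grouping by recipient and season before totaling. A sketch of the stacked-bar comparison, with illustrative rows standing in for the Fund_Flow_GC_2 query output (guild names and amounts are placeholders):

```r
library(tidyverse)

# Placeholder rows; the real data come from the DaoDash query export.
fund <- tribble(
  ~human_readable,   ~season, ~amount_display,
  "Writers Guild",   "S1",    250000,
  "Writers Guild",   "S2",    300000,
  "Analytics Guild", "S1",     80000,
  "Analytics Guild", "S2",    120000
)

# Total per recipient per season; .groups = "drop" silences the message.
by_guild <- fund %>%
  group_by(human_readable, season) %>%
  summarise(total = sum(amount_display), .groups = "drop")

ggplot(by_guild, aes(human_readable, total, fill = season)) +
  geom_col() +                                  # stacked bars by default
  scale_y_continuous(labels = scales::comma) +
  coord_flip()
```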

Bankless Vault Fund Distribution

Note: In Season 1, the Analytics Guild received funds from the Grants Committee address; in Season 2, it received funds from the Bankless Vault address.

See DaoDash: Fund_Flow_Bankless_Vault (nearly identical to Fund_Flow_GC_2, except direction is ‘In’).

## Parsed with column specification:
## cols(
##   direction = col_character(),
##   from_address = col_character(),
##   readable = col_character(),
##   entity_type = col_character(),
##   amount_display = col_double(),
##   timestamp_display = col_datetime(format = ""),
##   to_address = col_character(),
##   human_readable = col_character()
## )
## Parsed with column specification:
## cols(
##   direction = col_character(),
##   from_address = col_character(),
##   readable = col_character(),
##   entity_type = col_character(),
##   amount_display = col_double(),
##   timestamp_display = col_datetime(format = ""),
##   to_address = col_character(),
##   human_readable = col_character()
## )
## `summarise()` regrouping output by 'human_readable' (override with `.groups` argument)

## `summarise()` regrouping output by 'human_readable' (override with `.groups` argument)

Example Sankey

Grants Committee Season 1 Sankey

Workflow

Step 1: Replace NA values. Note: NA values stem from new projects and guilds appearing after the bankless_wallet_entity table was created in DaoDash, so be sure to update that table before proceeding.

Step 2: Create a my_links table. Step 2a: create a my_nodes table. Step 2b: create the IDSource and IDTarget columns. Step 2c: create a my_color object.

Step 3: Create the Sankey with the sankeyNetwork() function from the networkD3 library.

NOTE: For a comparison of Grants Committee distribution S1 vs S2, use a side-by-side bar chart.
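A minimal sketch of Steps 2-3, with hypothetical flows from the Grants Committee vault to two guilds standing in for the real data:

```r
library(dplyr)
library(networkD3)

# Hypothetical flows; real values come from the fund-flow query.
flows <- tibble(
  source = c("Grants Committee", "Grants Committee"),
  target = c("Writers Guild", "Analytics Guild"),
  value  = c(250000, 80000)
)

# Step 2a: node table. Step 2b: zero-indexed id columns sankeyNetwork() expects.
my_nodes <- tibble(name = unique(c(flows$source, flows$target)))
my_links <- flows %>%
  mutate(IDSource = match(source, my_nodes$name) - 1,
         IDTarget = match(target, my_nodes$name) - 1)

# Step 3: sankeyNetwork() wants plain data frames, hence the
# "Converting to a plain data frame" messages in the output.
sankeyNetwork(Links = as.data.frame(my_links), Nodes = as.data.frame(my_nodes),
              Source = "IDSource", Target = "IDTarget",
              Value = "value", NodeID = "name")
```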

## Parsed with column specification:
## cols(
##   direction = col_character(),
##   from_address = col_character(),
##   readable = col_character(),
##   entity_type = col_character(),
##   amount_display = col_double(),
##   timestamp_display = col_datetime(format = ""),
##   to_address = col_character(),
##   human_readable = col_character()
## )
## Links is a tbl_df. Converting to a plain data frame.

Grants Committee Season 2 Sankey

NOTE: Need to consult this sheet to add the additional project and guild multisig addresses to the bankless_wallet_entity table in DAODash.

## Parsed with column specification:
## cols(
##   direction = col_character(),
##   from_address = col_character(),
##   readable = col_character(),
##   entity_type = col_character(),
##   amount_display = col_double(),
##   timestamp_display = col_datetime(format = ""),
##   to_address = col_character(),
##   human_readable = col_character()
## )
## Links is a tbl_df. Converting to a plain data frame.
## Nodes is a tbl_df. Converting to a plain data frame.

REVISED Sankey: Grants Committee Vault, Season 1

NOTE: full_grants <- rbind(gcs1a, gcs2a, bv_s1a, bv_s2a)

  • full_grants combines grants committee + bankless_vault, across s1 and s2
  • start with full_grants
## # A tibble: 27 x 1
##    name                          
##    <chr>                         
##  1 BountyBoard                   
##  2 A/V Guild                     
##  3 Bankless Academy              
##  4 Analytics Guild               
##  5 Degen                         
##  6 Bankless Loans                
##  7 Writers Guild                 
##  8 Devs Guild Multisig           
##  9 Translators Guild             
## 10 Treasury Guild                
## 11 Ops Guild                     
## 12 Design Guild                  
## 13 Research Guild                
## 14 Marketing Guild               
## 15 First Quest                   
## 16 Bankless Website Multisig     
## 17 Liquity Project Multisig      
## 18 S1 Coordinape                 
## 19 BED Index                     
## 20 BizDev Guild                  
## 21 DAO Punks                     
## 22 Crypto Sapiens                
## 23 Legal Guild                   
## 24 Education Guild Multisig      
## 25 DevOps Infrastructure Multisig
## 26 Balancer Multisig             
## 27 Misc
## Links is a tbl_df. Converting to a plain data frame.

REVISED Sankey: Grants Committee Vault, Season 2

## # A tibble: 34 x 1
##    name                                      
##    <chr>                                     
##  1 BountyBoard                               
##  2 A/V Guild                                 
##  3 DaoDash                                   
##  4 FightClub                                 
##  5 Content Gateway                           
##  6 Writers Guild                             
##  7 Marketing Guild                           
##  8 S1 Coordinape                             
##  9 DAO Punks                                 
## 10 Legal Guild                               
## 11 Balancer Liquidity Mining Program Multisig
## 12 International Media Node Multisig         
## 13 Book Club Multisig                        
## 14 Podcast Hatchery Multisig                 
## 15 Flipper Zone Multisig                     
## 16 Misc                                      
## 17 Bankless Academy                          
## 18 Analytics Guild                           
## 19 Degen                                     
## 20 Bankless Loans                            
## 21 NewsletterTeam                            
## 22 Devs Guild Multisig                       
## 23 Translators Guild                         
## 24 Treasury Guild                            
## 25 Ops Guild                                 
## 26 Design Guild                              
## 27 Research Guild                            
## 28 First Quest                               
## 29 Bankless Website Multisig                 
## 30 Liquity Project Multisig                  
## 31 BED Index                                 
## 32 BizDev Guild                              
## 33 Education Guild Multisig                  
## 34 DevOps Infrastructure Multisig
## Links is a tbl_df. Converting to a plain data frame.

Number of BANK Holders (Membership Categories): S1 vs S2

Task: Compare S1 vs S2 on all membership categories.

Challenge: Inconsistent results using very similar queries across DaoDash, Dune Analytics, and Google BigQuery. Use Dune Analytics for now; reconcile sources in S3.

NOTE: Cumulative.

See mem_1_10K_bank_holdings_time_range and the other segments downloaded as raw CSVs. Data gathered from the DAODash table:
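The holder segments below can be derived from a single balance table by bucketing into bands. A sketch with illustrative rows; the band edges are assumed from the segment file names (1-10K, 10-35K, 35-150K, 150K-1.5M):

```r
library(dplyr)

# Illustrative rows standing in for the day/address/sum_balance export.
holders <- tibble(
  address     = c("0x01", "0x02", "0x03", "0x04"),
  sum_balance = c(5000, 20000, 90000, 400000)
)

# Bucket each address into its membership band, then count per band.
bands <- holders %>%
  mutate(band = cut(sum_balance,
                    breaks = c(1e3, 1e4, 3.5e4, 1.5e5, 1.5e6),
                    labels = c("1-10K", "10-35K", "35-150K", "150K-1.5M"))) %>%
  count(band)
```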

## 
## Attaching package: 'lubridate'
## The following objects are masked from 'package:base':
## 
##     date, intersect, setdiff, union
## 
## Attaching package: 'zoo'
## The following objects are masked from 'package:base':
## 
##     as.Date, as.Date.numeric
## Parsed with column specification:
## cols(
##   day = col_datetime(format = ""),
##   address = col_character(),
##   sum_balance = col_double()
## )

BANK Holders 10K - 35K

## Parsed with column specification:
## cols(
##   day = col_datetime(format = ""),
##   address = col_character(),
##   sum_balance = col_double()
## )

BANK Holders 35 - 150K

## Parsed with column specification:
## cols(
##   day = col_datetime(format = ""),
##   address = col_character(),
##   sum_balance = col_double()
## )

BANK Holders 150K - 1.5M

## Parsed with column specification:
## cols(
##   day = col_datetime(format = ""),
##   address = col_character(),
##   sum_balance = col_double()
## )

Coordinape Distribution

Task: Query comparison of tokens/bank distributed via Coordinape in S1 vs S2.

Challenge: See DaoDash query, Seasonal Coordinape Distribution

Request: Reach out to Saul for a Coordinape network growth visual, S1 to S2.

Create dataframe / tibble.
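The per-season token totals reported below come from summing `tokens` by season. A sketch with illustrative rows standing in for the Coordinape export (the season tags here are assumptions, since the real split is done by date or file):

```r
library(dplyr)

# Illustrative rows; the real data carry id, coord_id, recipient_id,
# sender_id, tokens, and new_timestamp per the spec above.
coordinape <- tibble(
  sender_id    = c(2225, 2202, 2225),
  recipient_id = c(2075, 2178, 2178),
  tokens       = c(100, 50, 75),
  season       = c("S1", "S1", "S2")
)

# Total tokens distributed per season.
season_totals <- coordinape %>%
  group_by(season) %>%
  summarise(sum_tokens = sum(tokens), .groups = "drop")
```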

## Parsed with column specification:
## cols(
##   id = col_double(),
##   coord_id = col_double(),
##   recipient_id = col_double(),
##   sender_id = col_double(),
##   tokens = col_double(),
##   new_timestamp = col_datetime(format = "")
## )
## Parsed with column specification:
## cols(
##   id = col_double(),
##   coord_id = col_double(),
##   recipient_id = col_double(),
##   sender_id = col_double(),
##   tokens = col_double(),
##   new_timestamp = col_datetime(format = "")
## )
## # A tibble: 1 x 1
##   sum_tokens
##        <dbl>
## 1     192076
## # A tibble: 1 x 1
##   sum_tokens
##        <dbl>
## 1     425561
## # A tibble: 302 x 2
##    sender_id     n
##        <dbl> <int>
##  1      2225   286
##  2      2202   278
##  3      2075   274
##  4      2178   271
##  5      2167   211
##  6      2240   128
##  7      2268   120
##  8      2056   112
##  9      2173   102
## 10      2267   100
## # … with 292 more rows
## # A tibble: 1 x 1
##       n
##   <int>
## 1   302
## # A tibble: 1 x 1
##       n
##   <int>
## 1   458
## # A tibble: 1 x 1
##       n
##   <int>
## 1   416
## # A tibble: 1 x 1
##       n
##   <int>
## 1   481

POAP Distribution

Task: Find the number of POAPs claimed for Community Calls, S1 vs S2.

Progress: Query Content Gateway API endpoint:

https://prod-content-gateway-api.herokuapp.com/api/v1/graphql

Query historical -> PoapV1 -> PoapTokenV1s

Can query all events containing “Bankless DAO Community” (for Community Call POAPs), but still need to find the number of POAPs claimed at each event.

Query: Combine the Content Gateway API and programmatically get the tokens distributed for each event.
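A sketch of hitting the GraphQL endpoint from R. The endpoint URL is the one given above, but the query body and field names here are assumptions based on the historical -> PoapV1 -> PoapTokenV1s path and would need to be confirmed against the schema:

```r
library(httr)
library(jsonlite)

# Hypothetical query body; field names unconfirmed against the schema.
query <- '{ "query": "{ poapTokenV1s { id eventName } }" }'

resp <- POST(
  "https://prod-content-gateway-api.herokuapp.com/api/v1/graphql",
  body = query, content_type_json()
)
tokens <- fromJSON(content(resp, as = "text"))$data$poapTokenV1s

# Keep only Community Call events:
# tokens[grepl("Bankless DAO Community", tokens$eventName), ]
```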

## Parsed with column specification:
## cols(
##   id = col_double(),
##   name = col_character(),
##   mint_count = col_character()
## )
## Warning: Problem with `mutate()` input `mint_count_2`.
## x NAs introduced by coercion
## ℹ Input `mint_count_2` is `as.numeric(mint_count)`.
## Warning in mask$eval_all_mutate(dots[[i]]): NAs introduced by coercion
## Warning: Removed 4 rows containing missing values (position_stack).
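The "NAs introduced by coercion" warnings above come from non-numeric `mint_count` strings, and the 4 dropped rows are those NAs falling out of the stacked-bar plot. A sketch of inspecting the offending rows before coercing (the "n/a" value is an illustrative stand-in):

```r
library(dplyr)

# Illustrative rows; the real data come from the id/name/mint_count CSV.
poaps <- tibble(
  name       = c("Community Call #1", "Community Call #2", "Community Call #3"),
  mint_count = c("120", "85", "n/a")
)

# Which rows will become NA under as.numeric()?
bad_rows <- poaps %>% filter(!grepl("^[0-9]+$", mint_count))

# Coerce deliberately, warnings suppressed now that the NAs are accounted for.
poaps2 <- poaps %>%
  mutate(mint_count_2 = suppressWarnings(as.numeric(mint_count)))
```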

Query from Content Gateway API Endpoint. https://prod-content-gateway-api.herokuapp.com/api/v1/graphql

NOTE: Used POAP.xyz subgraph API

Bounty Board

Task: Query total number of bounties and total value committed to bounties between S1 v S2.

Challenge: querying the database in prod. WIP.

Number of bounties: S1: 40, S2: 66+

Value committed to bounties: S1: 68K, S2: 209K

Update: refresh numbers on 15/01/22.

Discourse

Task: Connect to API to query forum posts with polls and number of people voting.

Challenge: API documentation does not mention Polls or Votes. See documentation here.

Progress: Established an API connection with the pydiscourse client: https://github.com/BanklessDAO/analytics/blob/main/discourse_forum/api_connection.py

Ended up manually downloading a CSV of all users instead of votes; will track other engagement metrics like:

  • topics_entered
  • posts_read_count
  • time_read
  • topic_count

Note: remove the email column, then save the raw data to GitHub.

First Quest

Task: Query number of new members going through First Quest

Challenge: Data inserted starting 2021-11-18; might have insufficient data for an S1 vs S2 comparison.

Just show S2 First Quest data standalone (comparing the funnel shape from two time periods).

Tips Given & Received

Task: Get comparison between S1 and S2, create tibble/dataframe to display data.

Data: See DaoDash query: Seasonal Tip Distribution

Season 1 Tip Distribution (2,873 rows; 983,019 tips given): WHERE timestamp < '2021-10-08'::DATE AND timestamp > '2021-06-14'::DATE

Season 2 Tip Distribution (1,336 rows; 337,173 tips given): WHERE timestamp > '2021-10-07'::DATE AND timestamp < '2022-01-08'::DATE
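The seasonal WHERE clauses above translate directly into dplyr filters. A sketch with illustrative rows standing in for the Seasonal Tip Distribution query output:

```r
library(dplyr)

# Illustrative rows; the real data come from the DaoDash tip query.
tips <- tibble(
  timestamp = as.Date(c("2021-07-01", "2021-11-01", "2022-02-01")),
  amount    = c(500, 300, 200)
)

# Season 1 window: 2021-06-14 < timestamp < 2021-10-08
s1_tips <- tips %>%
  filter(timestamp > as.Date("2021-06-14"), timestamp < as.Date("2021-10-08"))

# Season 2 window: 2021-10-07 < timestamp < 2022-01-08
s2_tips <- tips %>%
  filter(timestamp > as.Date("2021-10-07"), timestamp < as.Date("2022-01-08"))

sum(s1_tips$amount)  # S1 total
sum(s2_tips$amount)  # S2 total
```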

Next Section